
    Occlusion-Robust MVO: Multimotion Estimation Through Occlusion Via Motion Closure

    Visual motion estimation is an integral and well-studied challenge in autonomous navigation. Recent work has focused on addressing multimotion estimation, which is especially challenging in highly dynamic environments. Such environments not only comprise multiple, complex motions but also tend to exhibit significant occlusion. Previous work in object tracking focuses on maintaining the integrity of object tracks but usually relies on specific appearance-based descriptors or constrained motion models. These approaches are very effective in specific applications but do not generalize to the full multimotion estimation problem. This paper presents a pipeline for estimating multiple motions, including the camera egomotion, in the presence of occlusions. This approach uses an expressive motion prior to estimate the SE(3) trajectory of every motion in the scene, even during temporary occlusions, and to identify the reappearance of motions through motion closure. The performance of this occlusion-robust multimotion visual odometry (MVO) pipeline is evaluated on real-world data and the Oxford Multimotion Dataset. Comment: To appear at the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). An earlier version of this work first appeared at the Long-term Human Motion Planning Workshop (ICRA 2019). 8 pages, 5 figures. Video available at https://www.youtube.com/watch?v=o_N71AA6FR
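
    The "motion prior through occlusion" and "motion closure" ideas can be illustrated with a minimal Python sketch. The paper's prior is more expressive than this, so treat the constant-velocity SE(3) model, the function names, and the thresholds below as illustrative assumptions rather than the authors' implementation.

        import numpy as np
        from scipy.linalg import expm, logm

        def extrapolate_through_occlusion(T_prev, T_curr, n_steps):
            """Predict n_steps future 4x4 poses by repeating the last observed frame-to-frame motion."""
            xi = np.real(logm(np.linalg.inv(T_prev) @ T_curr))  # se(3) twist of the last observed step
            preds, T = [], T_curr
            for _ in range(n_steps):
                T = T @ expm(xi)
                preds.append(T)
            return preds

        def motion_closure_match(T_pred, T_obs, trans_tol=0.5, rot_tol=0.3):
            """Crude closure test: does a newly observed motion line up with an extrapolated one?"""
            E = np.linalg.inv(T_pred) @ T_obs
            trans_err = np.linalg.norm(E[:3, 3])
            rot_err = np.arccos(np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
            return trans_err < trans_tol and rot_err < rot_tol

    In this toy version, a trajectory that disappears is propagated by its last estimated velocity, and a reappearing motion is associated with it when the predicted and observed poses agree within loose translation and rotation tolerances.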

    Multimotion Visual Odometry (MVO)

    Visual motion estimation is a well-studied challenge in autonomous navigation. Recent work has focused on addressing multimotion estimation in highly dynamic environments. These environments not only comprise multiple, complex motions but also tend to exhibit significant occlusion. Estimating third-party motions simultaneously with the sensor egomotion is difficult because an object's observed motion consists of both its true motion and the sensor motion. Most previous work in multimotion estimation simplifies this problem by relying on appearance-based object detection or application-specific motion constraints. These approaches are effective in specific applications and environments but do not generalize well to the full multimotion estimation problem (MEP). This paper presents Multimotion Visual Odometry (MVO), a multimotion estimation pipeline that estimates the full SE(3) trajectory of every motion in the scene, including the sensor egomotion, without relying on appearance-based information. MVO extends the traditional visual odometry (VO) pipeline with multimotion segmentation and tracking techniques. It uses physically founded motion priors to extrapolate motions through temporary occlusions and identify the reappearance of motions through motion closure. Evaluations on real-world data from the Oxford Multimotion Dataset (OMD) and the KITTI Vision Benchmark Suite demonstrate that MVO achieves good estimation accuracy compared to similar approaches and is applicable to a variety of multimotion estimation challenges. Comment: Under review for the International Journal of Robotics Research (IJRR), Manuscript #IJR-21-4311. 25 pages, 14 figures, 11 tables. Videos available at https://www.youtube.com/watch?v=mNj3s1nf-6A and https://www.youtube.com/playlist?list=PLbaQBz4TuPcxMIXKh5Q80s0N9ISezFcp
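
    The coupling between third-party motion and egomotion described above can be made concrete with a short Python sketch. This is not the MVO estimator; the frame-naming convention (T_world_cam, T_cam_obj) and the toy poses are assumptions chosen purely for illustration.

        import numpy as np

        def compose(*Ts):
            """Chain 4x4 homogeneous transforms left to right."""
            out = np.eye(4)
            for T in Ts:
                out = out @ T
            return out

        def object_motion_in_world(T_world_cam_k, T_world_cam_k1, T_cam_obj_k, T_cam_obj_k1):
            """Object's own frame-to-frame motion in the world frame, recovered from its
            camera-relative observations and the camera egomotion."""
            T_world_obj_k = compose(T_world_cam_k, T_cam_obj_k)
            T_world_obj_k1 = compose(T_world_cam_k1, T_cam_obj_k1)
            return T_world_obj_k1 @ np.linalg.inv(T_world_obj_k)

        def translation(t):
            """Pure-translation transform (illustrative helper)."""
            T = np.eye(4)
            T[:3, 3] = t
            return T

        # A static object appears to move in the camera frame purely because the camera
        # translates; composing with the egomotion recovers an identity (i.e., no) motion.
        T_wc0, T_wc1 = translation([0.0, 0.0, 0.0]), translation([1.0, 0.0, 0.0])
        T_co0, T_co1 = translation([5.0, 0.0, 0.0]), translation([4.0, 0.0, 0.0])
        print(object_motion_in_world(T_wc0, T_wc1, T_co0, T_co1))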

    The Oxford Multimotion Dataset: Multiple SE(3) Motions with Ground Truth

    Datasets advance research by posing challenging new problems and providing standardized methods of algorithm comparison. High-quality datasets exist for many important problems in robotics and computer vision, including egomotion estimation and motion/scene segmentation, but not for techniques that estimate every motion in a scene. Metric evaluation of these multimotion estimation techniques requires datasets consisting of multiple, complex motions that also contain ground truth for every moving body. The Oxford Multimotion Dataset provides a number of multimotion estimation problems of varying complexity. It includes both complex problems that challenge existing algorithms and simpler problems to support development. These include observations from both static and dynamic sensors, a varying number of moving bodies, and a variety of different 3D motions. It also provides a number of experiments designed to isolate specific challenges of the multimotion problem, including rotation about the optical axis and occlusion. In total, the Oxford Multimotion Dataset contains over 110 minutes of multimotion data consisting of stereo and RGB-D camera images, IMU data, and Vicon ground-truth trajectories. The dataset culminates in a complex toy car segment representative of many challenging real-world scenarios. This paper describes each experiment with a focus on its relevance to the multimotion estimation problem. Comment: 8 pages, 8 figures. Video available at https://www.youtube.com/watch?v=zXaHEdiKxdA. Dataset available at https://robotic-esp.com/datasets
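
    As a sketch of the kind of metric evaluation the ground-truth trajectories enable, the Python snippet below computes per-frame translational and rotational errors between an estimated and a ground-truth pose sequence. It is a generic metric written for illustration, not the dataset's official tooling, and it assumes the two sequences are already time-aligned and expressed in a common frame.

        import numpy as np

        def pose_errors(T_est_seq, T_gt_seq):
            """Per-frame translational (m) and rotational (rad) errors between two 4x4 pose sequences."""
            t_errs, r_errs = [], []
            for T_est, T_gt in zip(T_est_seq, T_gt_seq):
                E = np.linalg.inv(T_gt) @ T_est  # residual pose
                t_errs.append(np.linalg.norm(E[:3, 3]))
                cos_theta = (np.trace(E[:3, :3]) - 1.0) / 2.0
                r_errs.append(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
            return np.array(t_errs), np.array(r_errs)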

    Multimotion Visual Odometry (MVO): Simultaneous Estimation of Camera and Third-Party Motions

    Estimating motion from images is a well-studied problem in computer vision and robotics. Previous work has developed techniques to estimate the motion of a moving camera in a largely static environment (e.g., visual odometry) and to segment or track motions in a dynamic scene using known camera motions (e.g., multiple object tracking). It is more challenging to estimate the unknown motion of the camera and the dynamic scene simultaneously. Most previous work requires a priori object models (e.g., tracking-by-detection) or motion constraints (e.g., planar motion), or fails to estimate the full SE(3) motions of the scene (e.g., scene flow). While these approaches work well in specific application domains, they do not generalize to unconstrained motions. This paper extends the traditional visual odometry (VO) pipeline to estimate the full SE(3) motion of both a stereo/RGB-D camera and the dynamic scene. This multimotion visual odometry (MVO) pipeline requires no a priori knowledge of the environment or the dynamic objects. Its performance is evaluated on a real-world dynamic dataset with ground truth for all motions from a motion capture system. Comment: This updated manuscript corrects the experimental results published in the proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 8 pages, 7 figures. Video available at https://www.youtube.com/watch?v=84tXCJOlj0
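
    One way to picture the multimotion extension of the VO pipeline is as a RANSAC-style assignment of 3D feature tracks to rigid SE(3) hypotheses, as in the Python sketch below. MVO's actual segmentation and tracking are considerably more sophisticated; the helper names, thresholds, and sample sizes here are illustrative assumptions.

        import numpy as np

        def fit_rigid(P, Q):
            """Least-squares SE(3) transform aligning points P (Nx3) onto Q (Nx3) via the Kabsch algorithm."""
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cp).T @ (Q - cq)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            T = np.eye(4)
            T[:3, :3], T[:3, 3] = R, cq - R @ cp
            return T

        def segment_motions(P0, P1, thresh=0.05, min_support=10, iters=200, seed=0):
            """Greedily label tracked 3D points by the rigid motion that explains them (-1 = unexplained)."""
            rng = np.random.default_rng(seed)
            labels = -np.ones(len(P0), dtype=int)
            remaining = np.arange(len(P0))
            motion_id = 0
            while len(remaining) >= min_support:
                best = np.array([], dtype=int)
                for _ in range(iters):
                    idx = rng.choice(remaining, size=3, replace=False)
                    T = fit_rigid(P0[idx], P1[idx])
                    pred = P0[remaining] @ T[:3, :3].T + T[:3, 3]
                    inliers = remaining[np.linalg.norm(pred - P1[remaining], axis=1) < thresh]
                    if len(inliers) > len(best):
                        best = inliers
                if len(best) < min_support:
                    break
                labels[best] = motion_id  # one label per recovered motion, egomotion included
                remaining = np.setdiff1d(remaining, best)
                motion_id += 1
            return labels

    Each recovered label corresponds to one rigid motion between consecutive frames; the dominant cluster typically corresponds to the static background and hence to the camera egomotion.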

    Detecting periodicity in experimental data using linear modeling techniques

    Fourier spectral estimates and, to a lesser extent, the autocorrelation function are the primary tools to detect periodicities in experimental data in the physical and biological sciences. We propose a new method that is more reliable than traditional techniques and is able to make clear identification of periodic behavior when traditional techniques cannot. This technique is based on an information-theoretic reduction of linear (autoregressive) models so that only the essential features of an autoregressive model are retained. We call these models reduced autoregressive models (RARM). The essential features of reduced autoregressive models include any periodicity present in the data. We provide theoretical and numerical evidence from both experimental and artificial data to demonstrate that this technique will reliably detect periodicities if and only if they are present in the data. There are strong information-theoretic arguments to support the statement that RARM detects periodicities if they are present; surrogate data techniques are used to ensure the converse. Furthermore, our calculations demonstrate that RARM is more robust, more accurate, and more sensitive than traditional spectral techniques. Comment: 10 pages (revtex) and 6 figures. To appear in Phys Rev E. Modified styl
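
    A rough Python sketch of the reduced-autoregressive idea: fit autoregressive models restricted to subsets of lags and keep only the lags that an information criterion justifies, so that a retained long lag flags a periodicity. The greedy search and the AIC score below are stand-ins for the paper's information-theoretic criterion; all names and parameters are illustrative.

        import numpy as np

        def fit_ar_subset(x, lags, max_lag):
            """Least-squares AR fit using only the given lags; returns the residual variance."""
            X = np.column_stack([x[max_lag - k:len(x) - k] for k in lags])
            y = x[max_lag:]
            coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
            return np.var(y - X @ coeffs)

        def reduced_ar_lags(x, max_lag=40):
            """Greedy forward selection of AR lags scored by AIC (a stand-in for the RARM criterion)."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            n = len(x) - max_lag
            chosen, best_score = [], np.inf
            while True:
                candidates = [l for l in range(1, max_lag + 1) if l not in chosen]
                if not candidates:
                    return chosen
                scored = [(n * np.log(fit_ar_subset(x, chosen + [l], max_lag)) + 2 * (len(chosen) + 1), l)
                          for l in candidates]
                score, lag = min(scored)
                if score >= best_score:
                    return chosen  # no remaining lag is worth its complexity cost
                chosen.append(lag)
                best_score = score

        # Example: a seasonal AR process x_t = 0.8*x_{t-12} + noise; the retained lags should include 12.
        rng = np.random.default_rng(1)
        x = np.zeros(600)
        for t in range(12, 600):
            x[t] = 0.8 * x[t - 12] + 0.3 * rng.normal()
        print(reduced_ar_lags(x))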

    The Grizzly, December 2, 1988

    No Longer Stoned by Administration: Charges Dropped • 145 Chickens of Chadwick Chain Check In • Letter: Cross Country Earns Kudos • Lantern Thrives at Fifty-five • Peace Hosts a Challenge • Happy Hanukkah! • Happenin' Holidays • Hallelujah to Handel's Messiah Performance • Hermann and Murphy Take Grizzly Reins • Crossroads Debuts • Ursinus Hoopsters' Clutch Plays Lift Bears' to Fast Start • 'Mers Sunk by W.C. • Ursinus' Lady Bears Riding 4-Game Win Streak • Ursinus Gymnasts Open Season at Navy • Bravo! Bravo! • Speth Sets Better Limit • Dean Nace Leads MBA Race • Outstanding Alumnae Address Whitians • Maintenance Maintains Ursinus • Final Exam Schedule https://digitalcommons.ursinus.edu/grizzlynews/1225/thumbnail.jp

    The Grizzly, November 17, 1989

    Inspired Voices Speak Out Nationally • Appealing for Unborn Lives • Boorstin Speaks at U.C. • Letters: Pledging Under Siege; Grizzly Growls; Did Berman Ask You?; Doctors do it Right; Only Doug; Wipe Mud From Shoudt's Face; Wrong!; GDI Promotes Disunity • Changing Dining Atmosphere • Save a Forest: Recycle! • Career Day • Running Bears Finish Strong • Grizzlies Downed by Devils • Ladies Finish Winning Season • Praise Hockey Team • Swimming Prospectives • Greek News • Stroke on A'Bears • Don't Talk Dirty to Me • Top Ten Things Loved at U.C. https://digitalcommons.ursinus.edu/grizzlynews/1247/thumbnail.jp

    Carbon release from submarine seeps at the Costa Rica fore arc: implications for the volatile cycle at the Central America convergent margin

    We report total dissolved inorganic carbon (DIC) abundances and isotope ratios, as well as helium isotope ratios (³He/⁴He), of cold seep fluids sampled at the Costa Rica fore arc in order to evaluate the extent of carbon loss from the submarine segment of the Central America convergent margin. Seep fluids were collected over a 12 month period at Mound 11, Mound 12, and Jaco Scar using copper tubing attached to submarine flux meters operating in continuous pumping mode. The fluids show minimum ³He/⁴He ratios of 1.3 R_A (where R_A is the ³He/⁴He ratio of air), consistent with a small but discernable contribution of mantle-derived helium. At Mound 11, δ¹³C-ΣCO₂ values between −23.9‰ and −11.6‰ indicate that DIC is predominantly derived from deep methanogenesis and is carried to the surface by fluids derived from sediments of the subducting slab. In contrast, at Mound 12, most of the ascending dissolved methane is oxidized due to lower flow rates, giving extremely low δ¹³C-ΣCO₂ values ranging from −68.2‰ to −60.3‰. We estimate that the carbon flux (CO₂ plus methane) through submarine fluid venting at the outer fore arc is 8.0 × 10⁵ g C km⁻¹ yr⁻¹, which is virtually negligible compared to the total sedimentary carbon input to the margin and the output at the volcanic front. Unless there is a significant but hitherto unidentified carbon flux at the inner fore arc, the implication is that most of the carbon being subducted in Costa Rica must be transferred to the (deeper) mantle, i.e., beyond the depth of arc magma generation.